Building a computer that can support artificial intelligence at the scale and complexity of the human brain will be a colossal engineering effort. Now researchers at the National Institute of Standards and Technology have outlined how they think we’ll get there.
How, when, and whether we’ll ever create machines that can match our cognitive capabilities is a topic of heated debate among both computer scientists and philosophers. One of the most contentious questions is the extent to which the solution needs to mirror our best example of intelligence so far: the human brain.
Rapid advances in AI powered by deep neural networks—which despite their name operate very differently from the brain—have convinced many that we may be able to achieve “artificial general intelligence” without mimicking the brain’s hardware or software.
Others think we’re still missing fundamental aspects of how intelligence works, and that the best way to fill the gaps is to borrow from nature. For many that means building “neuromorphic” hardware that more closely mimics the architecture and operation of biological brains.
The problem is that the computer technology at our disposal looks very different from biological information-processing systems and operates on completely different principles. For a start, modern computers are digital while neurons are analog. And although both rely on electrical signals, those signals come in very different flavors, and the brain also uses a host of chemical signals to carry out its processing.
Now, though, researchers at NIST think they’ve found a way to combine existing technologies to mimic the core attributes of the brain. Using their approach, they outline a blueprint for a “neuromorphic supercomputer” that could not only match but surpass the physical limits of biological systems.
The key to their approach, outlined in Applied Physics Letters, is a combination of electronics and optical technologies. The logic is that electronics are great at computing, while optical systems can transmit information at the speed of light, so combining them is probably the best way to mimic the brain’s excellent computing and communication capabilities.
It’s not a new idea, but so far getting our best electronic and optical hardware to gel has proven incredibly tough. The team thinks it has found a potential workaround: dropping the temperature of the system to negative 450 degrees Fahrenheit, just a few degrees above absolute zero.
While that might seem to only complicate matters, it actually opens up a host of new hardware possibilities. There are a bunch of high-performance electronic and optical components that only work at these frigid temperatures, like superconducting electronics, single-photon detectors, and silicon LEDs.
The researchers propose using these components to build artificial neurons that operate more like their biological cousins than conventional computer components, firing off electrical impulses, or spikes, rather than shuttling numbers around.
Each neuron has thousands of artificial synapses made from single-photon detectors, which pick up optical messages from other neurons. These incoming signals are combined and processed by superconducting circuits, and once the combined signal crosses a certain threshold, the neuron’s silicon LED is activated, sending an optical impulse to all downstream neurons.
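To make that signal flow concrete, here is a minimal integrate-and-fire sketch in Python of the behavior described above. It is purely illustrative: the OptoNeuron class, its parameters, and the discrete time steps are simplifications invented for readability, not part of the NIST design, which integrates photon-detection events in analog superconducting circuitry.

```python
# Illustrative integrate-and-fire model of one optoelectronic neuron.
# Assumed names and parameters; the real hardware is analog, not step-based.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class OptoNeuron:
    threshold: float                  # integrated signal needed to fire the LED
    leak: float = 0.9                 # fraction of stored signal kept each step
    integrated: float = 0.0           # signal accumulated by the superconducting circuit
    synapses: List[Tuple["OptoNeuron", float]] = field(default_factory=list)

    def receive(self, photons: int, weight: float = 1.0) -> None:
        # A single-photon detector registers incoming optical pulses.
        self.integrated += photons * weight

    def step(self) -> bool:
        # Fire the silicon LED once the stored signal crosses the threshold,
        # broadcasting one optical spike to every downstream neuron.
        if self.integrated >= self.threshold:
            self.integrated = 0.0
            for downstream, weight in self.synapses:
                downstream.receive(photons=1, weight=weight)
            return True
        self.integrated *= self.leak  # stored signal decays between spikes
        return False


# Two neurons: a spike from `a` reaches `b` over an idealized optical link.
b = OptoNeuron(threshold=2.0)
a = OptoNeuron(threshold=3.0, synapses=[(b, 1.5)])
a.receive(photons=3)                  # enough input to push `a` over its threshold
print(a.step(), b.step())             # True False: a fires, b integrates the pulse
```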
The researchers envisage combining millions of these neurons on 300-millimeter silicon wafers and then stacking the wafers to create a highly interconnected network that mimics the architecture of the brain. Short-range connections would be handled by optical waveguides on each chip, and long-range ones by fiber-optic cables.
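As a rough illustration of that two-tier interconnect, the toy function below picks a link type based on where two neurons sit. The addressing scheme and the decision rule are assumptions made up for this example, not the actual routing design in the proposal.

```python
# Hypothetical addressing scheme: each neuron is located by (wafer, chip, index).
from typing import NamedTuple


class Address(NamedTuple):
    wafer: int
    chip: int
    index: int


def link_type(src: Address, dst: Address) -> str:
    """Pick the physical medium a spike would travel over between two neurons."""
    if src.wafer == dst.wafer and src.chip == dst.chip:
        return "on-chip optical waveguide"   # short-range, within a single chip
    return "fiber-optic cable"               # long-range, between chips or wafers


print(link_type(Address(0, 4, 17), Address(0, 4, 902)))  # on-chip optical waveguide
print(link_type(Address(0, 4, 17), Address(7, 1, 33)))   # fiber-optic cable
```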
They acknowledge that the need to cryogenically cool the entire device is a challenge. But they say the improved power efficiency of their design should cancel out the cost of this cooling, and that a system on the scale of the human brain should require no more power or space than a modern supercomputer. They also point out that significant R&D is going into cryogenically cooled quantum computers, which this effort could likely piggyback on.
Some of the basic components of the system have already been experimentally demonstrated by the researchers, though they admit there’s still a long way to go to put all the pieces together. While many of these components are compatible with standard electronics fabrication, finding ways to manufacture them cheaply and integrate them will be a mammoth task.
Perhaps more important is the question of what kind of software the machine would run. It’s designed to implement “spiking neural networks” similar to those found in the brain, but our understanding of biological neural networks is still rudimentary, and our ability to mimic them is even worse. While both scientists and tech companies have been experimenting with the approach, it is still far less capable than deep learning.
Given the enormous engineering challenge involved in building a device of this scale, it may be a while before this blueprint makes it off the drawing board. But the proposal is an intriguing new chapter in the hunt for artificial general intelligence.
Image Credit: InspiredImages from Pixabay